Sum-of-squares proofs and the quest toward optimal algorithms
In order to obtain the best-known guarantees, algorithms are traditionally
tailored to the particular problem we want to solve. Two recent developments,
the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method,
surprisingly suggest that this tailoring is not necessary and that a single
efficient algorithm could achieve the best possible guarantees for a wide
range of different problems.
The Unique Games Conjecture (UGC) is a tantalizing conjecture in
computational complexity which, if true, would shed light on the complexity
of a great many problems. In particular, this conjecture predicts that a single
concrete algorithm provides optimal guarantees among all efficient algorithms
for a large class of computational problems.
The Sum-of-Squares (SOS) method is a general approach for solving systems of
polynomial constraints. This approach is studied in several scientific
disciplines, including real algebraic geometry, proof complexity, control
theory, and mathematical programming, and has found applications in fields as
diverse as quantum information theory, formal verification, game theory and
many others.
We survey some connections that were recently uncovered between the Unique
Games Conjecture and the Sum-of-Squares method. In particular, we discuss new
tools to rigorously bound the running time of the SOS method for obtaining
approximate solutions to hard optimization problems, and how these tools
could allow the sum-of-squares method to provide new guarantees for many
problems of interest, and possibly even to refute the UGC.
Comment: Survey. To appear in proceedings of ICM 201
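The key object in the SOS method is a proof of a polynomial inequality as an explicit sum of squares, which a verifier can check by polynomial arithmetic alone. The following toy sketch (illustrative only, not from the survey; all function names are ours) verifies such a degree-2 certificate for a univariate polynomial:

```python
# Toy illustration: an SOS proof certifies a polynomial inequality by
# exhibiting an explicit decomposition into squares, which a verifier
# checks by polynomial arithmetic alone.

def poly_mul(p, q):
    """Multiply univariate polynomials given as coefficient lists
    (index i holds the coefficient of x**i)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

# Claim: p(x) = x^2 - 4x + 5 satisfies p(x) >= 1 for all real x.
# SOS certificate: p(x) = (x - 2)^2 + 1, a square plus a constant.
p = [5, -4, 1]                        # 5 - 4x + x^2
square = poly_mul([-2, 1], [-2, 1])   # (x - 2)^2
certificate = poly_add(square, [1])   # (x - 2)^2 + 1

assert certificate == p  # decomposition checks out, so p(x) >= 1
print("SOS certificate verified: x^2 - 4x + 5 >= 1 for all real x")
```

The actual SOS method searches for such certificates of a bounded degree via semidefinite programming; this sketch only shows the verification step, which is what makes the certificates efficiently checkable.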
Quantum entanglement, sum of squares, and the log rank conjecture
For every , we give an
-time algorithm for the vs
\emph{Best Separable State (BSS)} problem of distinguishing, given
an matrix corresponding to a quantum measurement,
between the case that there is a separable (i.e., non-entangled) state
that accepts with probability , and the case that every
separable state is accepted with probability at most .
Equivalently, our algorithm takes the description of a subspace (where can be either the real or
complex field) and distinguishes between the case that contains a
rank one matrix, and the case that every rank one matrix is at least
far (in distance) from .
To the best of our knowledge, this is the first improvement over the
brute-force -time algorithm for this problem. Our algorithm is based
on the \emph{sum-of-squares} hierarchy and its analysis is inspired by Lovett's
proof (STOC '14, JACM '16) that the communication complexity of every rank-
Boolean matrix is bounded by .
Comment: 23 pages + 1 title-page + 1 table-of-content
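The subspace formulation above asks whether a linear space of matrices contains a rank-one element. In the smallest nontrivial setting this can be decided directly: for a pencil of 2x2 matrices, rank one (for a nonzero matrix) is equivalent to determinant zero, which is a quadratic in the pencil parameter. A toy sketch (illustrative only; the paper's subspaces live in high dimension, where no such brute-force route exists):

```python
# Toy version of the equivalent formulation: decide whether the pencil
# {A + t*B} of 2x2 matrices contains a determinant-zero (hence rank <= 1)
# matrix, by solving the quadratic det(A + t*B) = 0 in t.
import math

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def rank_one_in_pencil(A, B):
    """Return a real t with det(A + t*B) == 0, or None if none exists."""
    # det(A + t*B) = a*t^2 + b*t + c, expanded term by term:
    a = det2(B)
    c = det2(A)
    b = (A[0][0] * B[1][1] + B[0][0] * A[1][1]
         - A[0][1] * B[1][0] - B[0][1] * A[1][0])
    if a == 0:
        return -c / b if b != 0 else (0.0 if c == 0 else None)
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    return (-b + math.sqrt(disc)) / (2 * a)

A = [[1, 0], [0, 1]]    # identity: rank 2
B = [[1, 0], [0, -1]]
t = rank_one_in_pencil(A, B)
M = [[A[i][j] + t * B[i][j] for j in range(2)] for i in range(2)]
print(t, det2(M))       # -1.0 0.0  (A - B has rank one)
```

In higher dimension the determinantal variety is no longer a single quadratic, which is why the paper needs the sum-of-squares hierarchy rather than root-finding.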
Rounding Sum-of-Squares Relaxations
We present a general approach to rounding semidefinite programming
relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our
approach is based on using the connection between these relaxations and the
Sum-of-Squares proof system to transform a *combining algorithm* -- an
algorithm that maps a distribution over solutions into a (possibly weaker)
solution -- into a *rounding algorithm* that maps a solution of the relaxation
to a solution of the original problem.
Using this approach, we obtain algorithms that yield improved results for
natural variants of three well-known problems:
1) We give a quasipolynomial-time algorithm that approximates the maximum of
a low degree multivariate polynomial with non-negative coefficients over the
Euclidean unit sphere. Beyond being of interest in its own right, this is
related to an open question in quantum information theory, and our techniques
have already led to improved results in this area (Brandão and Harrow, STOC
'13).
2) We give a polynomial-time algorithm that, given a d dimensional subspace
of R^n that (almost) contains the characteristic function of a set of size n/k,
finds a vector in the subspace satisfying ,
where . Aside from being a natural relaxation, this
is also motivated by a connection to the Small Set Expansion problem shown by
Barak et al. (STOC 2012) and our results yield a certain improvement for that
problem.
3) We use this notion of L_4 vs. L_2 sparsity to obtain a polynomial-time
algorithm with substantially improved guarantees for recovering a planted
-sparse vector v in a random d-dimensional subspace of R^n. If v has mu n
nonzero coordinates, we can recover it with high probability whenever ,
improving upon prior methods, which intrinsically required .
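The combining-to-rounding transformation above starts from an algorithm that only needs low-degree moments of a distribution over solutions. A minimal sketch of the combining idea (hypothetical names, not the paper's actual algorithm): given the first moments of a distribution over {-1,+1} assignments, output the coordinatewise signs. An SOS relaxation supplies exactly such "pseudo-moments", so the same map doubles as a rounding algorithm.

```python
# Hypothetical sketch of a "combining algorithm": map the first moments
# E[x_i] of a distribution over {-1,+1}^n solutions to a single
# (possibly weaker) solution by taking coordinatewise signs.

def combine_from_marginals(first_moments):
    """Map each E[x_i] to a single +/-1 assignment."""
    return [1 if m >= 0 else -1 for m in first_moments]

# Distribution: uniform over three +/-1 assignments.
support = [(1, 1, -1), (1, -1, -1), (1, 1, 1)]
moments = [sum(s[i] for s in support) / len(support) for i in range(3)]
print(moments)                          # [1.0, 0.333..., -0.333...]
print(combine_from_marginals(moments))  # [1, 1, -1]
```

The point of the paper's transformation is that an algorithm written against true moments, when run on SOS pseudo-moments instead, can be analyzed using the corresponding sum-of-squares proof.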
Subsampling Mathematical Relaxations and Average-case Complexity
We initiate a study of when the value of mathematical relaxations such as
linear and semidefinite programs for constraint satisfaction problems (CSPs) is
approximately preserved when restricting the instance to a sub-instance induced
by a small random subsample of the variables. Let be a family of CSPs such
as 3SAT, Max-Cut, etc., and let be a relaxation for , in the sense
that for every instance , is an upper bound on the maximum
fraction of satisfiable constraints of . Loosely speaking, we say that
subsampling holds for and if for every sufficiently dense instance and every , if we let be the instance obtained by
restricting to a sufficiently large constant number of variables, then
. We say that weak subsampling holds if the
above guarantee is replaced with whenever
. We show: 1. Subsampling holds for the BasicLP and BasicSDP
programs. BasicSDP is a variant of the relaxation considered by Raghavendra
(2008), who showed it gives an optimal approximation factor for every CSP under
the unique games conjecture. BasicLP is the linear programming analog of
BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional
constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of
unique games type. 3. There are non-unique CSPs for which even weak subsampling
fails for the above tighter semidefinite programs. Also there are unique CSPs
for which subsampling fails for the Sherali-Adams linear programming hierarchy.
As a corollary of our weak subsampling for strong semidefinite programs, we
obtain a polynomial-time algorithm to certify that random geometric graphs (of
the type considered by Feige and Schechtman, 2002) of max-cut value
have a cut value at most .
Comment: Includes several more general results that subsume the previous
version of the paper
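The subsampling question above can be made concrete with a toy experiment (illustrative only, not the paper's construction): brute-force the max-cut *fraction* of a small dense random graph and of a random induced subsample, and compare the two values.

```python
# Toy subsampling experiment: compare the max-cut fraction of a dense
# random instance with that of a random induced sub-instance.
import itertools
import random

def max_cut_fraction(n, edges):
    """Exact max-cut value as a fraction of edges, by brute force."""
    if not edges:
        return 1.0
    best = 0
    for bits in itertools.product([0, 1], repeat=n):
        best = max(best, sum(1 for u, v in edges if bits[u] != bits[v]))
    return best / len(edges)

random.seed(0)
n = 12
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < 0.5]       # dense G(n, 1/2) instance
sample = random.sample(range(n), 8)      # restrict to a variable subset
idx = {v: i for i, v in enumerate(sample)}
sub = [(idx[u], idx[v]) for u, v in edges if u in idx and v in idx]

print(round(max_cut_fraction(n, edges), 3),
      round(max_cut_fraction(8, sub), 3))  # the two values tend to be close
```

The paper's results concern the analogous question for the *relaxation* value rather than the true optimum, where preservation under subsampling is much less obvious.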
Classical algorithms and quantum limitations for maximum cut on high-girth graphs
We study the performance of local quantum algorithms such as the Quantum
Approximate Optimization Algorithm (QAOA) for the maximum cut problem, and
their relationship to that of classical algorithms.
(1) We prove that every (quantum or classical) one-local algorithm achieves
on -regular graphs of girth a maximum cut of at most for . This is the first such result
showing that one-local algorithms achieve a value bounded away from the true
optimum for random graphs, which is for
. (2) We show that there is a classical -local algorithm
that achieves a value of for -regular
graphs of girth , where . This is an
algorithmic version of the existential bound of Lyons and is related to the
algorithm of Aizenman, Lebowitz, and Ruelle (ALR) for the
Sherrington-Kirkpatrick model. This bound is better than that achieved by the
one-local and two-local versions of QAOA on high-girth graphs. (3) Through
computational experiments, we give evidence that the ALR algorithm achieves
better performance than constant-locality QAOA for random -regular graphs,
as well as other natural instances, including graphs that do have short cycles.
Our experimental work suggests that these results could extend beyond our
theoretical constraints. This points at the tantalizing possibility that
-local quantum maximum-cut algorithms might be *pointwise dominated* by
polynomial-time classical algorithms, in the sense that there is a classical
algorithm outputting cuts of equal or better quality *on every possible
instance*. This is in contrast to the evidence that polynomial-time algorithms
cannot simulate the probability distributions induced by local quantum
algorithms.
Comment: 1+20 pages, 2 figures, code online at https://tiny.cc/QAOAvsAL
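For intuition about what a simple classical local heuristic for maximum cut looks like, here is a minimal sketch (illustrative only; this is neither the paper's k-local algorithm nor the ALR algorithm): place vertices one at a time on whichever side of the cut disagrees with more of their already-placed neighbors, which guarantees cutting at least half the edges of any graph.

```python
# A simple classical greedy cut heuristic: each vertex joins the side
# that cuts more of its already-placed neighbors, so every edge is cut
# with "probability" at least 1/2 when its later endpoint is placed.

def greedy_cut(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = [None] * n
    for v in range(n):
        placed = [side[u] for u in adj[v] if side[u] is not None]
        # joining side 0 cuts the neighbors currently on side 1
        side[v] = 0 if placed.count(1) >= placed.count(0) else 1
    return side

def cut_size(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus a chord
side = greedy_cut(4, edges)
assert cut_size(edges, side) >= len(edges) // 2
print(side, cut_size(edges, side))  # cuts 4 of the 5 edges
```

The interesting regime in the paper is the improvement *beyond* this trivial 1/2 baseline, where the advantage over a random cut scales with the degree and girth of the graph.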
On Higher-Order Cryptography
Type-two constructions abound in cryptography: adversaries for encryption and authentication schemes, if active, are modeled as algorithms having access to oracles, i.e., as second-order algorithms. But what about making cryptographic schemes themselves higher-order? This paper gives an answer to this question, by first describing why higher-order cryptography is interesting as an object of study, then showing how the concept of a probabilistic polynomial-time algorithm can be generalized so as to encompass algorithms of order strictly higher than two, and finally proving some positive and negative results about the existence of higher-order cryptographic primitives, namely authentication schemes and pseudorandom functions.
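The standard second-order setup the abstract describes can be sketched in a few lines (names and key are illustrative, not from the paper): an adversary is a higher-order function that receives the MAC oracle as an argument, never the key itself.

```python
# Sketch of a second-order adversary: a function taking an oracle
# (here, an HMAC-based MAC) as input and attempting a forgery.
import hashlib
import hmac

KEY = b"demo-key"  # hypothetical fixed key, hidden from the adversary

def mac_oracle(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(mac_oracle(message), tag)

def naive_adversary(oracle) -> tuple:
    """A second-order algorithm: queries its oracle, then tries to
    reuse the observed tag on a fresh message (a forgery attempt)."""
    tag = oracle(b"hello")
    return (b"hello world", tag)

msg, tag = naive_adversary(mac_oracle)
print(verify(msg, tag))  # False: tag reuse fails against HMAC
```

The paper's question is what happens when the *scheme itself*, not just the adversary, is allowed to be a higher-order object in this sense.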